Microsoft Copilot
AI might not be coming for lawyers' jobs anytime soon
Generative AI might have aced the bar exam, but an LLM still can't think like a lawyer. When the generative AI boom took off in 2022, Rudi Miller and her law school classmates were suddenly gripped with anxiety. "Before graduating, there was discussion about what the job market would look like for us if AI became adopted," she recalls. So when it came time to choose a specialty, Miller, now a junior associate at the law firm Orrick, decided to become a litigator, the kind of lawyer who represents clients in court. She hoped the courtroom would be the last human stage. "Judges haven't allowed ChatGPT-enabled robots to argue in court yet," she says.
- North America > United States > Pennsylvania (0.04)
- North America > United States > Massachusetts (0.04)
- Law (1.00)
- Education > Educational Setting > Higher Education (0.70)
- Education > Curriculum > Subject-Specific Education (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.56)
How people used Microsoft Copilot in 2025, from coding to philosophy
Microsoft has released a new report showing what people used its AI assistant Copilot for in 2025. The analysis, based on 37.5 million de-identified conversations, shows that in addition to productivity, Copilot is used for health, relationships, and personalized guidance. Health was particularly prevalent on mobile, with users turning to Copilot around the clock for tips on exercise, routines, and wellness. In the run-up to Valentine's Day, Microsoft saw a surge in conversations about relationships and personal development.
Zero Data Retention in LLM-based Enterprise AI Assistants: A Comparative Study of Market Leading Agentic AI Products
Gupta, Komal, Shrivastava, Aditya
Data governance, compliance, and business privacy matter, particularly for healthcare and finance businesses. Since the recent emergence of enterprise AI assistants that enhance business productivity, safeguarding private data and maintaining compliance have become priorities. As AI assistants are deployed across the enterprise, zero data retention can be achieved through zero data retention policies offered by large language model providers such as OpenAI, Anthropic, and Meta. In this work, we explore zero data retention policies for enterprise applications of large language models (LLMs). Our key contribution is defining the architectural, compliance, and usability trade-offs of such systems in parallel. We examine the development of commercial AI assistants from two industry leaders in this arena, Salesforce and Microsoft, each of which uses a distinct technical architecture to support zero data retention. Salesforce Agentforce and Microsoft Copilot are among the leading AI assistants driving business productivity in customer care. The purpose of this paper is to analyze the technical architecture and deployment of zero data retention policies by consuming applications as well as by large language model service providers such as OpenAI, Anthropic, and Meta.
- Asia > China (0.04)
- North America > United States (0.04)
- Asia > India > Haryana (0.04)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Software (0.95)
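Zero data retention is ultimately a contractual and server-side matter between the enterprise and the model provider, but the application-layer idea the abstract describes can be illustrated with a minimal, hypothetical sketch. The class name, endpoint shape, and the `store` flag below are assumptions for illustration only, not Salesforce's or Microsoft's actual architecture.

```python
# Hypothetical illustration of an application-layer zero-data-retention wrapper.
# ZeroRetentionClient, the endpoint shape, and the "store" flag are invented for
# this sketch; real enterprise ZDR is negotiated with the provider and enforced
# server-side, not by client code alone.
import requests

class ZeroRetentionClient:
    """Send prompts to an LLM endpoint without persisting request or response bodies."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.api_key = api_key

    def complete(self, prompt: str) -> str:
        payload = {
            "model": "example-model",  # placeholder model name
            "input": prompt,
            "store": False,            # ask the provider not to retain the exchange,
                                       # where the API supports such a flag
        }
        resp = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json=payload,
            timeout=30,
        )
        resp.raise_for_status()
        # Deliberately no logging of prompt or completion content here: in a ZDR
        # setup only coarse, content-free telemetry would be recorded.
        return resp.json().get("output", "")
```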
GenAI on Wall Street -- Opportunities and Risk Controls
We give an overview of the emerging applications of GenAI in the financial industry, especially within investment banks. Inherent to these exciting opportunities is a new realm of risks that must be managed properly. By heeding both the Yin and Yang sides of GenAI, we can accelerate its organic growth while safeguarding the entire financial industry during this nascent era of AI.
- North America > United States > New York > New York County > New York City (0.50)
- North America > United States > Texas (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (5 more...)
- Law (1.00)
- Information Technology (1.00)
- Banking & Finance > Trading (1.00)
- Health & Medicine > Therapeutic Area (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.95)
- (4 more...)
Can AI Read Between The Lines? Benchmarking LLMs On Financial Nuance
Kubica, Dominick, Gordon, Dylan T., Emura, Nanami, Saini, Derleen, Goldenberg, Charlie
As of 2025, Generative Artificial Intelligence (GenAI) has become a central tool for productivity across industries. Beyond text generation, GenAI now plays a critical role in coding, data analysis, and research workflows. As large language models (LLMs) continue to evolve, it is essential to assess the reliability and accuracy of their outputs, especially in specialized, high-stakes domains like finance. Most modern LLMs transform text into numerical vectors, which are used in operations such as cosine similarity searches to generate responses. However, this abstraction process can lead to misinterpretation of emotional tone, particularly in nuanced financial contexts. While LLMs generally excel at identifying sentiment in everyday language, these models often struggle with the nuanced, strategically ambiguous language found in earnings call transcripts. Financial disclosures frequently embed sentiment in hedged statements, forward-looking language, and industry-specific jargon, making it difficult even for human analysts to interpret consistently, let alone AI models. This paper presents findings from the Santa Clara Microsoft Practicum Project, led by Professor Charlie Goldenberg, which benchmarks the performance of Microsoft's Copilot, OpenAI's ChatGPT, Google's Gemini, and traditional machine learning models for sentiment analysis of financial text. Using Microsoft earnings call transcripts, the analysis assesses how well LLM-derived sentiment correlates with market sentiment and stock movements and evaluates the accuracy of model outputs. Prompt engineering techniques are also examined to improve sentiment analysis results. Visualizations of sentiment consistency are developed to evaluate alignment between tone and stock performance, with sentiment trends analyzed across Microsoft's lines of business to determine which segments exert the greatest influence.
- Research Report (0.82)
- Financial News (0.72)
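As a rough illustration of the vector-based sentiment scoring the abstract mentions, the sketch below compares a sentence embedding against positive and negative anchor embeddings via cosine similarity. The toy vectors stand in for real embeddings; they are placeholders, not the models or data used in the practicum project.

```python
# Minimal sketch of embedding-based sentiment scoring via cosine similarity.
# The vectors below are toy placeholders for real sentence embeddings.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_sentiment(sentence_vec, positive_vec, negative_vec) -> float:
    """Return a score in [-1, 1]: closer to the positive anchor -> higher."""
    return cosine(sentence_vec, positive_vec) - cosine(sentence_vec, negative_vec)

pos = np.array([0.9, 0.1, 0.0])                     # "clearly positive" anchor
neg = np.array([0.1, 0.9, 0.0])                     # "clearly negative" anchor
hedged_statement = np.array([0.55, 0.50, 0.05])     # hedged earnings-call language
print(score_sentiment(hedged_statement, pos, neg))  # near zero: tone is ambiguous
```

The near-zero score for the hedged vector mirrors the paper's point: strategically ambiguous financial language sits between the anchors, so a similarity-based reading gives little signal.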
Do you want Microsoft Copilot sniffing your OneDrive files? Too late
Many Windows users look down on OneDrive and Copilot alike, so the combination of the two might seem like the worst of all worlds. Expect the new Copilot for OneDrive to be equally polarizing. Microsoft is launching Copilot for OneDrive for the Web, which until now has been exclusive to business users. Today, Microsoft begins rolling it out to consumers, in the cloud rather than on your PC. Many PC users detest Windows' OneDrive feature, which launches, slurps up your data, and begins sending it to the cloud, taking up CPU cycles and broadband bandwidth.
Good/Evil Reputation Judgment of Celebrities by LLMs via Retrieval Augmented Generation
Tsuchida, Rikuto, Yokoyama, Hibiki, Utsuro, Takehito
The purpose of this paper is to examine whether large language models (LLMs) can understand what is good and evil with respect to judging the good/evil reputation of celebrities. Specifically, we first apply a large language model (namely, ChatGPT) to the task of collecting sentences that mention the target celebrity from articles about celebrities on Web pages. Next, the collected sentences are categorized based on their contents by ChatGPT, which assigns a category name to each category. These assigned category names are referred to as "aspects" of each celebrity. Then, by applying the framework of retrieval augmented generation (RAG), we show that the large language model is quite effective at judging the good/evil reputation of each celebrity's aspects and descriptions. Finally, to demonstrate the advantages of the proposed method over existing services incorporating RAG functions, we show that it significantly outperforms such a service at judging the good/evil of each celebrity's aspects and descriptions.
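A high-level sketch of the three-stage pipeline the abstract describes is shown below. The helper names and the `chat()` stub are assumptions for illustration, not the authors' implementation; in the paper both the aspect grouping and the good/evil judgment are produced by ChatGPT.

```python
# Sketch of the pipeline: (1) collect sentences about a celebrity,
# (2) group them into "aspects", (3) judge each aspect as good/evil with the
# retrieved sentences as context (RAG-style). chat() is a stand-in for any LLM call.

def chat(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def collect_sentences(celebrity: str, pages: list[str]) -> list[str]:
    # Keep only sentences that mention the target celebrity.
    return [s for page in pages for s in page.split(".") if celebrity in s]

def assign_aspects(sentences: list[str]) -> dict[str, list[str]]:
    # In the paper this grouping and the aspect names come from ChatGPT;
    # this trivial placeholder only shows the shape of the intermediate result.
    return {"general": sentences}

def judge_aspect(celebrity: str, aspect: str, evidence: list[str]) -> str:
    context = "\n".join(evidence)  # retrieval-augmented context
    prompt = (f"Based only on the following sentences about {celebrity}, "
              f"is the aspect '{aspect}' good or evil?\n{context}")
    return chat(prompt)
```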
How to turn off Copilot in Microsoft 365 and save a quick $30
It doesn't really matter whether you asked for it: chances are that if you've signed up for Microsoft 365, you're now paying more for Microsoft's integration of Copilot features, which was added without your consent. Microsoft is making available "classic" versions of its Microsoft 365 Personal and Microsoft 365 Family plans that remove the Copilot surcharge. But, as Microsoft said when it announced the change, these Classic plans are available only for a limited time. This article will show you how to switch back. Right now, when you sign up for Microsoft 365, you'll see options to pay $12.99/mo or $9.99/mo; those reflect the higher rates with the Copilot surcharge included.
The Artificial Intelligence Disclosure (AID) Framework: An Introduction
As the use of Generative Artificial Intelligence tools has grown in higher education and research, there have been increasing calls for transparency and granularity around the use and attribution of these tools. Thus far, this need has been met via the recommended inclusion of a note, with little to no guidance on what the note itself should include. This has been identified as a problem for the use of AI in academic and research contexts. This article introduces The Artificial Intelligence Disclosure (AID) Framework, a standard, comprehensive, and detailed framework meant to inform the development and writing of GenAI disclosures for education and research.
- Information Technology > Security & Privacy (0.71)
- Education > Educational Setting (0.50)
Exploring Bengali Religious Dialect Biases in Large Language Models with Evaluation Perspectives
Wasi, Azmine Toushik, Islam, Raima, Islam, Mst Rafia, Rafi, Taki Hasan, Chae, Dong-Kyu
While Large Language Models (LLMs) have created a massive technological impact in the past decade, enabling a wide range of human-facing applications, they can produce output that contains stereotypes and biases, especially when using low-resource languages. This can be of great ethical concern when dealing with sensitive topics such as religion. As a means toward making LLMs more fair, we explore bias from a religious perspective in Bengali, focusing specifically on two main religious dialects: the Hindu-majority and Muslim-majority dialects. We perform experiments and audits presenting a comparative analysis of sentences across the Hindu and Muslim dialect forms of specific words using three commonly used LLMs, ChatGPT, Gemini, and Microsoft Copilot, and show which models catch the social biases and which do not. Furthermore, we analyze our findings and relate them to potential reasons and evaluation perspectives, considering the global impact of Bengali, which has over 300 million speakers worldwide. With this work, we hope to establish the rigor needed to create more fairness in LLMs, as these are widely used as creative writing agents.
- Asia > Bangladesh (0.15)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.06)
- Asia > South Korea (0.04)
- (4 more...)
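The paired-dialect audit the abstract describes can be illustrated with a sketch like the one below: the same prompt template is filled with a Hindu-dialect and a Muslim-dialect form of a word, and the models' outputs are compared. The `query_model()` stub, template, and word pair are placeholders for illustration, not the paper's actual prompts or data.

```python
# Sketch of a paired-prompt bias audit across Bengali religious dialect forms.
# query_model(), the template, and WORD_PAIRS are illustrative placeholders.

WORD_PAIRS = [
    # (Hindu-majority dialect form, Muslim-majority dialect form) for "water"
    ("jol", "pani"),
]

TEMPLATE = "Write one sentence about a person who asks for {word}."

def query_model(model_name: str, prompt: str) -> str:
    raise NotImplementedError("call the model's API here")

def audit(models: list[str]) -> None:
    for hindu_form, muslim_form in WORD_PAIRS:
        for model in models:
            out_h = query_model(model, TEMPLATE.format(word=hindu_form))
            out_m = query_model(model, TEMPLATE.format(word=muslim_form))
            # Comparing out_h and out_m (manually or with a classifier) would
            # surface stereotyped or asymmetric descriptions tied to the dialect.
            print(model, hindu_form, out_h)
            print(model, muslim_form, out_m)
```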